6 research outputs found

    Extremely Low-light Image Enhancement with Scene Text Restoration

    Full text link
    Deep learning-based methods have made impressive progress in enhancing extremely low-light images: the quality of the reconstructed images has generally improved. However, we found that most of these methods cannot sufficiently recover image details, for instance, the texts in the scene. In this paper, a novel image enhancement framework is proposed to precisely restore the scene texts as well as the overall image quality simultaneously under extremely low-light conditions. Specifically, we employ a self-regularised attention map, an edge map, and a novel text detection loss. In addition, we show that leveraging synthetic low-light images benefits image enhancement on genuine ones in terms of text detection. The quantitative and qualitative experimental results show that the proposed model outperforms state-of-the-art methods in image restoration, text detection, and text spotting on the See In the Dark and ICDAR15 datasets.
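    The abstract names three ingredients: a self-regularised attention map, an edge map, and a text detection loss. The sketch below is a minimal, hypothetical illustration (not the authors' released code) of how such terms could be combined into one training objective in PyTorch; the module names (enhancer, detector), the Sobel edge operator, and the loss weights are all assumptions.

```python
# Hypothetical sketch of an enhancement objective with an edge term and a
# text-detection term, as the abstract describes. Not the paper's code.
import torch
import torch.nn.functional as F

def edge_map(img):
    """Gradient-magnitude edge map via Sobel filters (one plausible choice)."""
    gray = img.mean(dim=1, keepdim=True)  # (N, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, kx.transpose(2, 3), padding=1)  # Sobel y-kernel
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def training_loss(enhancer, detector, low, ref, w_edge=0.1, w_text=0.01):
    """Reconstruction + edge-consistency + text-detection terms.
    `enhancer` and `detector` are assumed nn.Modules; weights are guesses."""
    out = enhancer(low)                               # enhanced image
    l_rec = F.l1_loss(out, ref)                       # overall image quality
    l_edge = F.l1_loss(edge_map(out), edge_map(ref))  # detail/edge fidelity
    # Text term (assumption): a frozen detector's response on the enhanced
    # image should match its response on the well-lit reference.
    with torch.no_grad():
        target_score = detector(ref)
    l_text = F.mse_loss(detector(out), target_score)
    return l_rec + w_edge * l_edge + w_text * l_text
```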

    Cycle-Object Consistency for Image-to-Image Domain Adaptation

    No full text
    Recent advances in generative adversarial networks (GANs) have proven effective in performing domain adaptation for object detectors through data augmentation. While GANs are exceptionally successful, the methods that preserve objects well in the image-to-image translation task usually require an auxiliary task, such as semantic segmentation, to prevent the image content from being distorted too much. However, pixel-level annotations are difficult to obtain in practice. Alternatively, instance-aware image translation models treat object instances and the background separately. Yet, they require object detectors at test time, assuming that off-the-shelf detectors work well in both domains. In this work, we present AugGAN-Det, which introduces a Cycle-object Consistency (CoCo) loss to generate instance-aware translated images across complex domains. The object detector of the target domain is directly leveraged in generator training and guides the preserved objects in the translated images to carry target-domain appearances. Compared to previous models, which, e.g., require pixel-level semantic segmentation to force the latent distribution to be object-preserving, this work only needs bounding-box annotations, which are significantly easier to acquire. Moreover, unlike instance-aware GAN models, our model, AugGAN-Det, internalizes global and object style transfer without explicitly aligning the instance features. Most importantly, no detector is required at test time. Experimental results demonstrate that our model outperforms recent object-preserving and instance-level models and achieves state-of-the-art detection accuracy and visual perceptual quality.
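    The key mechanism, as described, is that a frozen target-domain detector supervises the generator during training only, so no detector is needed at test time. Below is a hedged sketch of what such a Cycle-object Consistency term might look like; the generator G, the detector's output format (a dict with "boxes" and "logits"), and the one-to-one matching between predictions and annotations are simplifying assumptions, not the paper's actual interface.

```python
# Hedged sketch of a CoCo-style term: the target-domain detector is frozen
# and used only as a training signal for the generator. Assumptions: the
# detector returns {"boxes", "logits"} already matched to the annotations.
import torch
import torch.nn.functional as F

def coco_loss(G, det_tgt, src_img, src_boxes, src_labels):
    """Encourage translated images to keep detectable, correctly located
    objects under the frozen *target-domain* detector det_tgt."""
    fake_tgt = G(src_img)          # source -> target translation
    preds = det_tgt(fake_tgt)      # frozen detector, gradients flow to G
    # Bounding boxes are assumed domain-invariant, so the source
    # annotations supervise detections on the translated image.
    loss_box = F.smooth_l1_loss(preds["boxes"], src_boxes)
    loss_cls = F.cross_entropy(preds["logits"], src_labels)
    return loss_box + loss_cls
```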

    CyEDA: Cycle-Object Edge Consistency Domain Adaptation

    No full text
    Despite the advent of domain adaptation methods, most of them still struggle to preserve the instance-level details of images when performing global-level translation. While there are instance-level translation methods that retain instance-level details well, most of them require either a pretrained object detection/segmentation network or annotation labels. In this work, we propose a novel method, CyEDA, that performs global-level domain adaptation while taking care of image content, without any pretrained networks or annotation labels. Specifically, we introduce masking and a cycle-object edge consistency loss that enforce the preservation of image objects. We show that our approach outperforms other state-of-the-art methods in terms of image quality and FID score on both the BDD100K and GTA datasets.
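    As a rough illustration of the cycle-object edge consistency idea (under assumptions, since the paper's code is not reproduced here): edges of an input image and of its cycle-reconstruction are compared inside a foreground mask, using only a training-free Sobel operator, so no pretrained network or labels are involved. The generators G_ab/G_ba and the origin of the mask are hypothetical.

```python
# Assumed sketch of a masked cycle edge-consistency term. G_ab/G_ba are
# the two translation generators; `mask` is a soft foreground mask in
# [0, 1], here assumed given (e.g. from the model's masking branch).
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Gradient-magnitude edge map; training-free, so no pretrained nets."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, kx.transpose(2, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def cycle_object_edge_loss(G_ab, G_ba, x, mask):
    """Edges of x and its a -> b -> a cycle-reconstruction should agree
    on object regions, pushing the cycle to preserve instance detail."""
    cyc = G_ba(G_ab(x))
    diff = (sobel_edges(x) - sobel_edges(cyc)).abs()
    return (mask * diff).mean()   # penalise edge drift only on objects
```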

    Extremely Low-light Image Enhancement with Scene Text Restoration

    No full text
    Deep learning-based methods have made impressive progress in enhancing extremely low-light images: the quality of the reconstructed images has generally improved. However, we found that most of these methods cannot sufficiently recover image details, for instance, the texts in the scene. In this paper, a novel image enhancement framework is proposed to specifically restore the scene texts as well as the overall image quality simultaneously under extremely low-light conditions. Particularly, we employ a self-regularised attention map, an edge map, and a novel text detection loss. The quantitative and qualitative experimental results show that the proposed model outperforms state-of-the-art methods in terms of image restoration, text detection, and text spotting on the See In the Dark and ICDAR15 datasets.